I’m very excited about the course! Going to learn something cool and start actively using GitHub. My GitHub page.
First, let’s load the data, which represents the relationship between learning approaches and students’ achievements. It consists of 166 observations of 7 variables. Column descriptions: gender (F/M), age (in years), attitude (attitude towards statistics), deep, stra and surf (mean scores of the questions measuring deep, strategic and surface learning approaches) and points (exam points).
learning2014 <- read.csv("/Users/anastasia/IODS-project/data/learning2014.csv")
dim(learning2014)
## [1] 166 7
str(learning2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
Now we can conduct a graphical analysis:
library(GGally)
## Loading required package: ggplot2
library(ggplot2)
# creates pairs plot
p <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
p
The very first row of the pairs plot above shows the histogram of the gender distribution (a dichotomous variable) and gender-wise box plots, where the ends of each box are the upper and lower quartiles and the median is marked by a vertical line inside the box. The first column shows the gender-wise distributions of “age”, “attitude”, “deep”, “stra”, “surf” and “points”. Pink refers to females, blue to males.
From the rest of the plot we can see:
The highest correlation coefficients are observed between:
The gender distribution is imbalanced: there are about twice as many females as males. The age distribution is heavily skewed towards younger ages. The rest of the distributions are also skewed, though not severely, and most of them have two peaks.
Now we conduct a linear regression analysis with “points” as the dependent variable and “attitude”, “stra” and “surf” as the explanatory variables, since they have the highest correlations with our target.
# creates a regression model with multiple explanatory variables
my_model <- lm(points ~ attitude + stra + surf, data = learning2014)
# prints a summary of the model
summary(my_model)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
The summary of the model suggests the following interpretation: holding the other variables fixed, a one-unit increase in “attitude” is associated with a 3.40-point increase in exam points, a one-unit increase in “stra” with a 0.85-point increase, and a one-unit increase in “surf” with a 0.59-point decrease.
However, the p-values suggest that only “attitude” is significant in the model at the 5% significance level. If you are not familiar with p-values, they are used to determine statistical significance in a hypothesis test (in our case, whether the coefficients of the linear regression are zero): the p-value is the probability of obtaining a result at least as extreme as the one observed given a true null hypothesis, where the true null stands for the situation when the corresponding coefficient is zero (zero influence on “points”).
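Formally, for each coefficient \(\beta_j\) the summary above reports the test
\[ H_0: \beta_j = 0 \quad \text{vs.} \quad H_1: \beta_j \neq 0, \]
and the p-value is the probability of observing a t-statistic at least as large in absolute value as the one computed from the data, assuming \(H_0\) is true; here the reference distribution is a t-distribution with 162 residual degrees of freedom.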
We fit another model without a “surf” variable:
# creates a regression model with multiple explanatory variables
my_model2 <- lm(points ~ attitude + stra, data = learning2014)
# prints a summary of the model
summary(my_model2)
##
## Call:
## lm(formula = points ~ attitude + stra, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.6436 -3.3113 0.5575 3.7928 10.9295
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.9729 2.3959 3.745 0.00025 ***
## attitude 3.4658 0.5652 6.132 6.31e-09 ***
## stra 0.9137 0.5345 1.709 0.08927 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared: 0.2048, Adjusted R-squared: 0.1951
## F-statistic: 20.99 on 2 and 163 DF, p-value: 7.734e-09
Now both variables are significant at the 10% significance level (both p-values are below 0.1). The new interpretation: holding the other variable fixed, a one-unit increase in “attitude” is associated with a 3.47-point increase in exam points and a one-unit increase in “stra” with a 0.91-point increase.
# set plots' locations
par(mfrow = c(2,2))
# creates diagnostic plots
plot(my_model, which=c(1, 2, 5))
The scatter plot of residuals vs fitted values (top-left) illustrates that the residuals are evenly distributed around zero. Thus, the assumption of homoscedasticity holds: the variance of the errors is the same across the range of fitted values. The Q-Q plot of the model residuals (top-right) provides a way to check the normality-of-errors assumption underlying linear regression; in our case it shows a very reasonable fit. The scatter plot of residuals vs leverage (bottom-left) illustrates the impact single observations have on the model. Three outlying observations stand out (35, 77 and 145), but they don’t severely influence the regression line. We can conclude that our linear model satisfies the model assumptions reasonably well.
The following chapter analyses the alcohol consumption of students of two Portuguese schools. The data attributes include student grades and demographic, social and school-related features; the data were collected using school reports and questionnaires. The variables’ names can be found below and their detailed description here. The purpose of the current analysis is to study the relationships between high/low alcohol consumption and some of the other variables in the data.
alc <- read.csv("/Users/anastasia/IODS-project/data/alc.csv")
dim(alc)
## [1] 382 35
colnames(alc)
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "nursery" "internet" "guardian" "traveltime"
## [16] "studytime" "failures" "schoolsup" "famsup" "paid"
## [21] "activities" "higher" "romantic" "famrel" "freetime"
## [26] "goout" "Dalc" "Walc" "health" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
str(alc)
## 'data.frame': 382 obs. of 35 variables:
## $ school : Factor w/ 2 levels "GP","MS": 1 1 1 1 1 1 1 1 1 1 ...
## $ sex : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
## $ age : int 18 17 15 15 16 16 16 17 15 15 ...
## $ address : Factor w/ 2 levels "R","U": 2 2 2 2 2 2 2 2 2 2 ...
## $ famsize : Factor w/ 2 levels "GT3","LE3": 1 1 2 1 1 2 2 1 2 1 ...
## $ Pstatus : Factor w/ 2 levels "A","T": 1 2 2 2 2 2 2 1 1 2 ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ Mjob : Factor w/ 5 levels "at_home","health",..: 1 1 1 2 3 4 3 3 4 3 ...
## $ Fjob : Factor w/ 5 levels "at_home","health",..: 5 3 3 4 3 3 3 5 3 3 ...
## $ reason : Factor w/ 4 levels "course","home",..: 1 1 3 2 2 4 2 2 2 2 ...
## $ nursery : Factor w/ 2 levels "no","yes": 2 1 2 2 2 2 2 2 2 2 ...
## $ internet : Factor w/ 2 levels "no","yes": 1 2 2 2 1 2 2 1 2 2 ...
## $ guardian : Factor w/ 3 levels "father","mother",..: 2 1 2 2 1 2 2 2 2 2 ...
## $ traveltime: int 2 1 1 1 1 1 1 2 1 1 ...
## $ studytime : int 2 2 2 3 2 2 2 2 2 2 ...
## $ failures : int 0 0 2 0 0 0 0 0 0 0 ...
## $ schoolsup : Factor w/ 2 levels "no","yes": 2 1 2 1 1 1 1 2 1 1 ...
## $ famsup : Factor w/ 2 levels "no","yes": 1 2 1 2 2 2 1 2 2 2 ...
## $ paid : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 1 1 2 2 ...
## $ activities: Factor w/ 2 levels "no","yes": 1 1 1 2 1 2 1 1 1 2 ...
## $ higher : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
## $ romantic : Factor w/ 2 levels "no","yes": 1 1 1 2 1 1 1 1 1 1 ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ freetime : int 3 3 3 2 3 4 4 1 2 5 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ Dalc : int 1 1 2 1 1 1 1 1 1 1 ...
## $ Walc : int 1 1 3 1 2 2 1 1 1 1 ...
## $ health : int 3 3 3 5 5 5 3 1 1 5 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ G1 : int 2 7 10 14 8 14 12 8 16 13 ...
## $ G2 : int 8 8 10 14 12 14 12 9 17 14 ...
## $ G3 : int 8 8 11 14 12 14 12 10 18 14 ...
## $ alc_use : num 1 1 2.5 1 1.5 1.5 1 1 1 1 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
For selecting some interesting variables for further analysis, it’s helpful to visualize them first.
library(dplyr)
##
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
##
## nasa
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
## Warning: attributes are not identical across measure variables;
## they will be dropped
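The chunk that draws the overview plot of all variables isn’t shown; a minimal sketch of how it could be produced (assuming tidyr’s gather(), which would also explain the warning above):
library(tidyr)
# reshape the data to long format and draw a bar plot of each variable
gather(alc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()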
library(corrr)
alc %>% select_if(is.numeric) %>% correlate() %>% focus(alc_use, Dalc, Walc)
##
## Correlation method: 'pearson'
## Missing treated using: 'pairwise.complete.obs'
## # A tibble: 14 x 4
## rowname alc_use Dalc Walc
## <chr> <dbl> <dbl> <dbl>
## 1 age 0.162 0.134 0.157
## 2 Medu 0.00924 0.0454 -0.0173
## 3 Fedu 0.00859 0.0215 -0.00168
## 4 traveltime 0.163 0.160 0.140
## 5 studytime -0.246 -0.186 -0.251
## 6 failures 0.185 0.162 0.174
## 7 famrel -0.121 -0.0930 -0.122
## 8 freetime 0.178 0.197 0.138
## 9 goout 0.387 0.268 0.411
## 10 health 0.0779 0.0625 0.0768
## 11 absences 0.215 0.174 0.210
## 12 G1 -0.176 -0.169 -0.155
## 13 G2 -0.159 -0.151 -0.140
## 14 G3 -0.156 -0.159 -0.131
Based on the computed correlation coefficients and my personal reasoning I come up with the following hypotheses: males consume more alcohol than females; alcohol use is negatively related to grades; higher alcohol use goes together with more absences; and going out with friends increases alcohol consumption.
Now I’m going to visualize them one by one. There are 198 females and 184 males in our dataset, so it’s quite balanced. The following graphs suggest that in general female students consume more alcohol than males, but when it comes to high alcohol consumption (\(\geq 3\)), males take the lead. So my first hypothesis is only partially true.
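The plotting chunks themselves aren’t shown; a sketch of how such gender comparisons could be drawn (the choice of bar plots is my own):
# counts of each alcohol-use level by sex
ggplot(alc, aes(x = factor(alc_use), fill = sex)) + geom_bar(position = "dodge") + xlab("alcohol use")
# counts of high/low alcohol use by sex
ggplot(alc, aes(x = high_use, fill = sex)) + geom_bar(position = "dodge")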
Next we explore the relationship between alcohol use and grades. First, I compute the mean grade (the average of G1, G2 and G3).
alc$G <- (alc$G1 + alc$G2 + alc$G3)/3
Now I plot the relationship. The overall trend supports my hypothesis of a negative relationship between alcohol use and grades.
I do the same for absences. The overall trend again supports my hypothesis: the higher the alcohol consumption, the more absences.
Here it is clearly seen how going out with friends goes together with higher alcohol consumption.
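Again, the plotting chunks are not shown; box plots along these lines (a sketch, my own choice of geoms) would support the comparisons above:
# mean grade versus high/low alcohol use
ggplot(alc, aes(x = high_use, y = G)) + geom_boxplot() + ylab("mean grade")
# absences versus high/low alcohol use
ggplot(alc, aes(x = high_use, y = absences)) + geom_boxplot()
# alcohol use versus going out with friends
ggplot(alc, aes(x = factor(goout), y = alc_use)) + geom_boxplot() + xlab("goout")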
I built a logistic regression model with the binary high/low alcohol consumption variable as the target and the following explanatory variables: sex, mean grade (G), absences and goout.
The fitted model says that, holding G, absences and goout at fixed values, the odds of high alcohol consumption for males over the odds for females is exp(0.95272) = 2.593. In other words, the odds of high alcohol consumption are about 2.6 times higher for males than for females. The coefficient for G says that, holding sex, absences and goout at fixed values, we will see about a 6% decrease in the odds of high alcohol consumption for a one-unit increase in grades (G), since exp(-0.05877) = 0.943. Holding sex, G and goout at fixed values, we will see about an 8% increase in the odds of high alcohol consumption for a one-unit increase in absences, since exp(0.08105) = 1.084. Holding sex, G and absences at fixed values, we will see about a 102% increase in the odds (they roughly double) for a one-unit increase in goout, since exp(0.70547) = 2.025.
my_model <- glm(high_use ~ sex + G + absences + goout, data = alc, family = "binomial")
summary(my_model)
##
## Call:
## glm(formula = high_use ~ sex + G + absences + goout, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.9328 -0.8060 -0.5331 0.8238 2.4661
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.39721 0.74736 -4.546 5.48e-06 ***
## sexM 0.95272 0.25510 3.735 0.000188 ***
## G -0.05877 0.04572 -1.286 0.198594
## absences 0.08105 0.02236 3.625 0.000289 ***
## goout 0.70547 0.12070 5.845 5.07e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 386.09 on 377 degrees of freedom
## AIC: 396.09
##
## Number of Fisher Scoring iterations: 4
coef(my_model)
## (Intercept) sexM G absences goout
## -3.39721198 0.95271930 -0.05877270 0.08105372 0.70546954
odds <- coef(my_model) %>% exp
ci <- confint(my_model) %>% exp
## Waiting for profiling to be done...
cbind(odds, ci)
## odds 2.5 % 97.5 %
## (Intercept) 0.03346645 0.007372191 0.1390862
## sexM 2.59275054 1.581774204 4.3094277
## G 0.94292107 0.861487242 1.0310898
## absences 1.08442915 1.039156246 1.1357634
## goout 2.02479718 1.607264661 2.5826300
From both the p-values of my regression model and the confidence intervals for the odds ratios it is clear that the variable G is not statistically significant (its p-value is about 0.2 and its confidence interval includes one).
First, I remove the redundant G variable from my model.
my_model_new <- glm(high_use ~ sex + absences + goout, data = alc, family = "binomial")
summary(my_model_new)
##
## Call:
## glm(formula = high_use ~ sex + absences + goout, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.7871 -0.8153 -0.5446 0.8357 2.4740
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -4.16317 0.47506 -8.764 < 2e-16 ***
## sexM 0.95872 0.25459 3.766 0.000166 ***
## absences 0.08418 0.02237 3.764 0.000167 ***
## goout 0.72981 0.11970 6.097 1.08e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 387.75 on 378 degrees of freedom
## AIC: 395.75
##
## Number of Fisher Scoring iterations: 4
Cross tabulation of predictions versus the actual values:
probabilities <- predict(my_model_new, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)
select(alc, sex, G, absences, goout, high_use, probability, prediction) %>% tail(10)
## sex G absences goout high_use probability prediction
## 373 M 4.000000 0 2 FALSE 0.14869987 FALSE
## 374 M 4.666667 7 3 TRUE 0.39514446 FALSE
## 375 F 12.333333 1 3 FALSE 0.13129452 FALSE
## 376 F 7.000000 6 3 FALSE 0.18714923 FALSE
## 377 F 7.000000 2 2 FALSE 0.07342805 FALSE
## 378 F 11.666667 2 4 FALSE 0.25434555 FALSE
## 379 F 5.333333 2 2 FALSE 0.07342805 FALSE
## 380 F 6.666667 3 1 FALSE 0.03989428 FALSE
## 381 M 12.666667 4 5 TRUE 0.68596604 TRUE
## 382 M 10.666667 2 1 TRUE 0.09060457 FALSE
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 253 15
## TRUE 65 49
As can be seen from the table, my model correctly classified \(253+49=302\) observations and misclassified \(65+15=80\). That gives a training error of \(80/382 \approx 0.21\).
g <- ggplot(alc, aes(x = probability, y = high_use, col = prediction))
g + geom_point()
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
## prediction
## high_use FALSE TRUE Sum
## FALSE 0.66230366 0.03926702 0.70157068
## TRUE 0.17015707 0.12827225 0.29842932
## Sum 0.83246073 0.16753927 1.00000000
Again, we compute the average number of incorrectly classified observations (the training error), this time using a loss function.
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2094241
library(boot)
set.seed(12345)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model, K = 10)
cv$delta[1]
## [1] 0.2094241
My model has better test set performance compared to that introduced in DataCamp (0.21 < 0.26).
model1 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model1)
Starting from the full model above, I perform backward elimination, at each step removing the variable with the highest p-value. I’ll first exclude the ‘higher’ variable since it has the highest p-value.
model2 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + internet + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model2)
Next, I exclude ‘internet’.
model3 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model3)
Exclude ‘schoolsup’.
model4 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + guardian + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model4)
Exclude ‘reason’.
model5 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + nursery + guardian + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model5)
Exclude ‘Medu’.
model6 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Fedu + Mjob + Fjob + nursery + guardian + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model6)
Exclude ‘guardian’.
model7 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Fedu + Mjob + Fjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model7)
Exclude ‘Fjob’.
model8 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Fedu + Mjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + G + absences, data = alc, family = "binomial")
# summary(model8)
Exclude G.
model9 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Fedu + Mjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model9)
Exclude P-status.
model10 <- glm(high_use ~ school + sex + age + address + famsize + Fedu + Mjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model10)
Exclude ‘school’.
model11 <- glm(high_use ~ sex + age + address + famsize + Fedu + Mjob + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model11)
Exclude ‘Mjob’.
model12 <- glm(high_use ~ sex + age + address + famsize + Fedu + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model12)
Exclude ‘age’.
model13 <- glm(high_use ~ sex + address + famsize + Fedu + nursery + traveltime + studytime + failures + famsup + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model13)
Exclude ‘famsup’.
model14 <- glm(high_use ~ sex + address + famsize + Fedu + nursery + traveltime + studytime + failures + paid + activities + romantic + famrel + freetime + goout + health + absences, data = alc, family = "binomial")
# summary(model14)
Exclude ‘freetime’.
model15 <- glm(high_use ~ sex + address + famsize + Fedu + nursery + traveltime + studytime + failures + paid + activities + romantic + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model15)
Exclude ‘Fedu’.
model16 <- glm(high_use ~ sex + address + famsize + nursery + traveltime + studytime + failures + paid + activities + romantic + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model16)
Exclude ‘failures’.
model17 <- glm(high_use ~ sex + address + famsize + nursery + traveltime + studytime + paid + activities + romantic + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model17)
Exclude ‘famsize’.
model18 <- glm(high_use ~ sex + address + nursery + traveltime + studytime + paid + activities + romantic + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model18)
Exclude ‘romantic’.
model19 <- glm(high_use ~ sex + address + nursery + traveltime + studytime + paid + activities + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model19)
Exclude ‘nursery’.
model20 <- glm(high_use ~ sex + address + traveltime + studytime + paid + activities + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model20)
Exclude ‘traveltime’.
model21 <- glm(high_use ~ sex + address + studytime + paid + activities + famrel + goout + health + absences, data = alc, family = "binomial")
# summary(model21)
Exclude ‘health’.
model22 <- glm(high_use ~ sex + address + studytime + paid + activities + famrel + goout + absences, data = alc, family = "binomial")
# summary(model22)
Now all the variables in my model are statistically significant. Let’s calculate the test errors using 10-fold cross-validation.
set.seed(12345)
cv1 <- cv.glm(data = alc, cost = loss_func, glmfit = model1, K = 10)
cv2 <- cv.glm(data = alc, cost = loss_func, glmfit = model2, K = 10)
cv3 <- cv.glm(data = alc, cost = loss_func, glmfit = model3, K = 10)
cv4 <- cv.glm(data = alc, cost = loss_func, glmfit = model4, K = 10)
cv5 <- cv.glm(data = alc, cost = loss_func, glmfit = model5, K = 10)
cv6 <- cv.glm(data = alc, cost = loss_func, glmfit = model6, K = 10)
cv7 <- cv.glm(data = alc, cost = loss_func, glmfit = model7, K = 10)
cv8 <- cv.glm(data = alc, cost = loss_func, glmfit = model8, K = 10)
cv9 <- cv.glm(data = alc, cost = loss_func, glmfit = model9, K = 10)
cv10 <- cv.glm(data = alc, cost = loss_func, glmfit = model10, K = 10)
cv11 <- cv.glm(data = alc, cost = loss_func, glmfit = model11, K = 10)
cv12 <- cv.glm(data = alc, cost = loss_func, glmfit = model12, K = 10)
cv13 <- cv.glm(data = alc, cost = loss_func, glmfit = model13, K = 10)
cv14 <- cv.glm(data = alc, cost = loss_func, glmfit = model14, K = 10)
cv15 <- cv.glm(data = alc, cost = loss_func, glmfit = model15, K = 10)
cv16 <- cv.glm(data = alc, cost = loss_func, glmfit = model16, K = 10)
cv17 <- cv.glm(data = alc, cost = loss_func, glmfit = model17, K = 10)
cv18 <- cv.glm(data = alc, cost = loss_func, glmfit = model18, K = 10)
cv19 <- cv.glm(data = alc, cost = loss_func, glmfit = model19, K = 10)
cv20 <- cv.glm(data = alc, cost = loss_func, glmfit = model20, K = 10)
cv21 <- cv.glm(data = alc, cost = loss_func, glmfit = model21, K = 10)
cv22 <- cv.glm(data = alc, cost = loss_func, glmfit = model22, K = 10)
test_errors <- c(cv1$delta[1], cv2$delta[1], cv3$delta[1], cv4$delta[1], cv5$delta[1], cv6$delta[1], cv7$delta[1], cv8$delta[1], cv9$delta[1], cv10$delta[1], cv11$delta[1], cv12$delta[1], cv13$delta[1], cv14$delta[1], cv15$delta[1], cv16$delta[1], cv17$delta[1], cv18$delta[1], cv19$delta[1], cv20$delta[1], cv21$delta[1], cv22$delta[1])
And the corresponding training errors.
probabilities1 <- predict(model1, type = "response")
probability1 <- probabilities1 > 0.5
probabilities2 <- predict(model2, type = "response")
probabilities3 <- predict(model3, type = "response")
probabilities4 <- predict(model4, type = "response")
probabilities5 <- predict(model5, type = "response")
probabilities6 <- predict(model6, type = "response")
probabilities7 <- predict(model7, type = "response")
probabilities8 <- predict(model8, type = "response")
probabilities9 <- predict(model9, type = "response")
probabilities10 <- predict(model10, type = "response")
probabilities11 <- predict(model11, type = "response")
probabilities12 <- predict(model12, type = "response")
probabilities13 <- predict(model13, type = "response")
probabilities14 <- predict(model14, type = "response")
probabilities15 <- predict(model15, type = "response")
probabilities16 <- predict(model16, type = "response")
probabilities17 <- predict(model17, type = "response")
probabilities18 <- predict(model18, type = "response")
probabilities19 <- predict(model19, type = "response")
probabilities20 <- predict(model20, type = "response")
probabilities21 <- predict(model21, type = "response")
probabilities22 <- predict(model22, type = "response")
loss1 <- loss_func(class = alc$high_use, prob = probabilities1)
loss2 <- loss_func(class = alc$high_use, prob = probabilities2)
loss3 <- loss_func(class = alc$high_use, prob = probabilities3)
loss4 <- loss_func(class = alc$high_use, prob = probabilities4)
loss5 <- loss_func(class = alc$high_use, prob = probabilities5)
loss6 <- loss_func(class = alc$high_use, prob = probabilities6)
loss7 <- loss_func(class = alc$high_use, prob = probabilities7)
loss8 <- loss_func(class = alc$high_use, prob = probabilities8)
loss9 <- loss_func(class = alc$high_use, prob = probabilities9)
loss10 <- loss_func(class = alc$high_use, prob = probabilities10)
loss11 <- loss_func(class = alc$high_use, prob = probabilities11)
loss12 <- loss_func(class = alc$high_use, prob = probabilities12)
loss13 <- loss_func(class = alc$high_use, prob = probabilities13)
loss14 <- loss_func(class = alc$high_use, prob = probabilities14)
loss15 <- loss_func(class = alc$high_use, prob = probabilities15)
loss16 <- loss_func(class = alc$high_use, prob = probabilities16)
loss17 <- loss_func(class = alc$high_use, prob = probabilities17)
loss18 <- loss_func(class = alc$high_use, prob = probabilities18)
loss19 <- loss_func(class = alc$high_use, prob = probabilities19)
loss20 <- loss_func(class = alc$high_use, prob = probabilities20)
loss21 <- loss_func(class = alc$high_use, prob = probabilities21)
loss22 <- loss_func(class = alc$high_use, prob = probabilities22)
train_errors <- c(loss1, loss2, loss3, loss4, loss5, loss6, loss7, loss8, loss9, loss10, loss11, loss12, loss13, loss14, loss15, loss16, loss17, loss18, loss19, loss20, loss21, loss22)
vars <- seq(from = 1, to = 22)
rates <- seq(from = 15, to = 25)
Finally, let’s plot it. Training errors are shown in blue and test errors in red. It can be seen that as the number of explanatory variables in the model decreases, the training error decreases here (at the beginning there were many redundant variables), while the test error stays more or less the same. In general, the more variables a model has, the lower the training error and the higher the test error, due to overfitting.
errors <- tibble(train_errors, test_errors)
p = ggplot() +
geom_line(data = errors, aes(x=vars, y = train_errors), color = "blue") +
geom_line(data = errors, aes(x=vars, y = test_errors), color = "red") +
xlab('Models') +
ylab('Error rates')
p
For this week’s analysis I use the Boston dataset from the MASS package. It contains housing values in the suburbs of Boston and consists of 506 observations of the following 14 variables (descriptions from the MASS documentation):
crim: per capita crime rate by town
zn: proportion of residential land zoned for lots over 25,000 sq.ft.
indus: proportion of non-retail business acres per town
chas: Charles River dummy variable (1 if the tract bounds the river, 0 otherwise)
nox: nitrogen oxides concentration (parts per 10 million)
rm: average number of rooms per dwelling
age: proportion of owner-occupied units built prior to 1940
dis: weighted mean of distances to five Boston employment centres
rad: index of accessibility to radial highways
tax: full-value property-tax rate per $10,000
ptratio: pupil-teacher ratio by town
black: 1000(Bk - 0.63)^2, where Bk is the proportion of the black population by town
lstat: lower status of the population (percent)
medv: median value of owner-occupied homes in $1000s
First, let’s explore the dataset a bit:
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
data("Boston")
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
Let’s have a closer look at the variables and their distributions.
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
We can check the relationships between the variables using a correlation matrix and a correlation plot:
## ── Attaching packages ─────────────────────────────────────────────────────────────────────── tidyverse 1.2.1 ──
## ✔ tibble 2.1.1 ✔ purrr 0.3.2
## ✔ readr 1.3.1 ✔ stringr 1.4.0
## ✔ tibble 2.1.1 ✔ forcats 0.4.0
## ── Conflicts ────────────────────────────────────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ✖ MASS::select() masks dplyr::select()
## corrplot 0.84 loaded
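The chunk computing the matrix below isn’t shown; it can be reproduced roughly as follows (a sketch: correlations rounded to two decimals and the plot drawn with corrplot, whose loading message appears above):
# correlation matrix of the Boston variables, rounded to two decimals
cor_matrix <- cor(Boston) %>% round(digits = 2)
cor_matrix
# visualize the correlation matrix
corrplot(cor_matrix, method = "circle", type = "upper")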
## crim zn indus chas nox rm age dis rad tax
## crim 1.00 -0.20 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58
## zn -0.20 1.00 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31
## indus 0.41 -0.53 1.00 0.06 0.76 -0.39 0.64 -0.71 0.60 0.72
## chas -0.06 -0.04 0.06 1.00 0.09 0.09 0.09 -0.10 -0.01 -0.04
## nox 0.42 -0.52 0.76 0.09 1.00 -0.30 0.73 -0.77 0.61 0.67
## rm -0.22 0.31 -0.39 0.09 -0.30 1.00 -0.24 0.21 -0.21 -0.29
## age 0.35 -0.57 0.64 0.09 0.73 -0.24 1.00 -0.75 0.46 0.51
## dis -0.38 0.66 -0.71 -0.10 -0.77 0.21 -0.75 1.00 -0.49 -0.53
## rad 0.63 -0.31 0.60 -0.01 0.61 -0.21 0.46 -0.49 1.00 0.91
## tax 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1.00
## ptratio 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46
## black -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44
## lstat 0.46 -0.41 0.60 -0.05 0.59 -0.61 0.60 -0.50 0.49 0.54
## medv -0.39 0.36 -0.48 0.18 -0.43 0.70 -0.38 0.25 -0.38 -0.47
## ptratio black lstat medv
## crim 0.29 -0.39 0.46 -0.39
## zn -0.39 0.18 -0.41 0.36
## indus 0.38 -0.36 0.60 -0.48
## chas -0.12 0.05 -0.05 0.18
## nox 0.19 -0.38 0.59 -0.43
## rm -0.36 0.13 -0.61 0.70
## age 0.26 -0.27 0.60 -0.38
## dis -0.23 0.29 -0.50 0.25
## rad 0.46 -0.44 0.49 -0.38
## tax 0.46 -0.44 0.54 -0.47
## ptratio 1.00 -0.18 0.37 -0.51
## black -0.18 1.00 -0.37 0.33
## lstat 0.37 -0.37 1.00 -0.74
## medv -0.51 0.33 -0.74 1.00
The highest correlations are observed between rad and tax (0.91), nox and dis (-0.77), indus and nox (0.76), age and dis (-0.75), and lstat and medv (-0.74).
Since my variables are measured on different scales, for further analysis I have to scale the data by subtracting the column means from the corresponding columns and dividing the differences by the column standard deviations.
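In other words, each column \(x\) is transformed as
\[ x_{scaled} = \frac{x - \bar{x}}{s_x}, \]
where \(\bar{x}\) is the column mean and \(s_x\) the column standard deviation; this is exactly what scale() does by default.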
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
boston_scaled <- as.data.frame(boston_scaled)
As can be seen, after scaling all the variables’ means are zero.
I also create a factor variable from the numerical crim, categorizing the crime rate into four classes (low, med_low, med_high, high) using its quantiles as break points.
bins <- quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, label = c("low", "med_low", "med_high", "high"))
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
I remove the initial variable crim and add the new categorical one.
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)
I split my data into train (80%) and test (20%) sets in order to assess the quality of the model I’m going to build. The model is trained on the train set, and predictions on new data are made on the test set.
n <- nrow(boston_scaled)
set.seed(12345)
ind <- sample(n, size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]
lda_model <- lda(crime~., data=train)
lda_model
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2475248 0.2574257 0.2475248 0.2475248
##
## Group means:
## zn indus chas nox rm
## low 0.97229754 -0.9597872 -0.154216061 -0.8889207 0.4570563
## med_low -0.08464861 -0.3244885 -0.007331936 -0.5892641 -0.1082762
## med_high -0.39119540 0.1704608 0.200122961 0.3770782 0.1375651
## high -0.48724019 1.0171519 -0.075474056 1.0547756 -0.5001506
## age dis rad tax ptratio
## low -0.8780782 0.9035281 -0.6913090 -0.7279533 -0.39471367
## med_low -0.3707589 0.3894044 -0.5434660 -0.5306788 -0.04341686
## med_high 0.4098247 -0.3529031 -0.4041923 -0.3146912 -0.31203261
## high 0.8165907 -0.8472627 1.6377820 1.5138081 0.78037363
## black lstat medv
## low 0.3857136 -0.7891212 0.51279116
## med_low 0.3160280 -0.1608905 0.01640163
## med_high 0.0563347 -0.0252292 0.17942536
## high -0.6736089 0.9100659 -0.66442722
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.06453072 0.74240382 -0.88262071
## indus 0.06465153 -0.34822618 0.43060847
## chas -0.07437636 -0.09918695 0.12926793
## nox 0.32119190 -0.73282709 -1.30214445
## rm -0.12621343 -0.11008660 -0.18940456
## age 0.24857359 -0.28701321 -0.33705453
## dis -0.06379877 -0.31544376 0.10788004
## rad 3.04902524 0.98466813 0.09844024
## tax 0.12625765 -0.05144994 0.20511932
## ptratio 0.09254812 0.02946943 -0.18922812
## black -0.09973884 0.03443536 0.12859239
## lstat 0.21995230 -0.17129701 0.51685693
## medv 0.22046620 -0.39973922 -0.12354310
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9459 0.0403 0.0139
The prior probabilities are simply the proportions of the four groups in the training data (roughly 1/4 each). The coefficients mean that the first discriminant function (LD1) is a linear combination of the variables: \(0.065 \cdot zn + 0.065 \cdot indus + \dots + 0.22 \cdot medv\). The proportion of trace gives the share of between-group variance explained by each discriminant: linear discriminant 1 explains almost 95% of the between-group variance.
Let’s draw the LDA biplot:
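The biplot relies on a small helper, lda.arrows(), for drawing the variable arrows (the same helper is called again further below); it isn’t defined on this page, so here is a sketch of one together with the plotting calls (the colour and arrow scaling are my own choices):
# draws arrows for the (scaled) coefficients of the linear discriminants
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1, 2)) {
  heads <- x$scaling
  arrows(x0 = 0, y0 = 0,
         x1 = myscale * heads[, choices[1]],
         y1 = myscale * heads[, choices[2]],
         col = color, length = arrow_heads)
  text(myscale * heads[, choices], labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}
# plot the LDA results on the first two discriminants, coloured by crime class
classes <- as.numeric(train$crime)
plot(lda_model, dimen = 2, col = classes, pch = classes)
lda.arrows(lda_model, myscale = 2)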
The most influential linear separators for the crime classes are rad, zn and nox. I save the correct classes from the test data set and then remove them from the data frame itself, since I’m going to test my model on it, so the information about the correct classification must not be there.
correct_classes <- test$crime
test <- dplyr::select(test, -crime)
Now I will make predictions based on a model:
lda.pred <- predict(lda_model, newdata = test)
And check the quality of prediction with cross-tabulation:
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 14 12 1 0
## med_low 1 17 4 0
## med_high 0 7 18 1
## high 0 0 0 27
It can be seen that the model predicts the high class perfectly, while the low and middle crime rates are often confused with their neighbouring classes, since they are probably less separable from each other. This is also clearly visible in the biplot, where the med_high (green) and med_low (red) points heavily overlap.
boston_scaled2 <- scale(Boston)
boston_scaled2 <- as.data.frame(boston_scaled2)
For calculating the distances between the observations I will use the most common measure, the Euclidean distance.
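For two observations \(x = (x_1, \dots, x_p)\) and \(y = (y_1, \dots, y_p)\) the Euclidean distance is
\[ d(x, y) = \sqrt{\sum_{j=1}^{p} (x_j - y_j)^2}. \]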
dist_eu <- dist(boston_scaled2, method = "euclidean")
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
Now I run the k-means algorithm. To determine the optimal number of clusters, let’s look at the total within-cluster sum of squares (TWCSS). The optimal number of clusters is where the TWCSS drops radically; here it is 2.
set.seed(123)
# max number of clusters
k_max <- 10
# the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled2, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')
km <- kmeans(boston_scaled2, centers = 2)
## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero
## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero
## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero
## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero
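The chunk behind the plot discussed below isn’t shown either; one way such a cluster visualization could be produced (a sketch assuming GGally’s ggpairs, which would also explain the cor() warnings above):
# pairs plot of the scaled data, coloured by the two k-means clusters
boston_clustered <- data.frame(boston_scaled2, cluster = factor(km$cluster))
ggpairs(boston_clustered, mapping = aes(col = cluster, alpha = 0.3))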
Looking at the plot we can see that for many variables the two classes are indeed separable, especially when looking at the distributions and the correlation coefficients (which differ between the two classes). Among the most visible and distinguishable differences between the classes are:
Let’s now perform k-means again, this time with 3 clusters.
km_new <- kmeans(boston_scaled2, centers = 3)
I perform LDA using the clusters as target classes.
new_data <- dplyr::select(boston_scaled2, -crim)
new_data <- data.frame(new_data, km_new$cluster)
set.seed(12345)
train_new <- new_data[ind,]
test_new <- new_data[-ind,]
lda_model_new <- lda(km_new.cluster~., data=train_new)
lda_model_new
## Call:
## lda(km_new.cluster ~ ., data = train_new)
##
## Prior probabilities of groups:
## 1 2 3
## 0.2896040 0.4331683 0.2772277
##
## Group means:
## zn indus chas nox rm age
## 1 -0.4872402 1.0650531 -0.03677606 1.1318802 -0.5060351 0.7835135
## 2 -0.3992808 -0.1386820 0.08763438 -0.1782981 -0.1640318 0.1855020
## 3 1.1380713 -0.9938046 -0.13171834 -0.9662319 0.7687324 -1.1415994
## dis rad tax ptratio black lstat
## 1 -0.84490528 1.4132224 1.3907131 0.61746316 -0.6404477 0.91465920
## 2 -0.06275337 -0.5802359 -0.5441956 -0.04043224 0.2486883 -0.05609896
## 3 1.07741104 -0.5901619 -0.6745844 -0.55643005 0.3671677 -0.93177559
## medv
## 1 -0.67072240
## 2 -0.06706528
## 3 0.84549682
##
## Coefficients of linear discriminants:
## LD1 LD2
## zn 0.043309481 0.83222632
## indus -0.272397029 0.01729030
## chas 0.004084862 -0.17907457
## nox -0.795442248 0.46890783
## rm 0.110569407 0.39122239
## age 0.082133532 -0.93047496
## dis 0.051331466 0.31679703
## rad -1.526920272 0.77083066
## tax -0.858771883 0.18114143
## ptratio -0.055424237 0.01223941
## black 0.042493264 -0.02976446
## lstat -0.376381397 0.43131410
## medv 0.004158310 0.53794043
##
## Proportion of trace:
## LD1 LD2
## 0.8753 0.1247
The coefficients mean that the first discriminant function (LD1) is a linear combination of the variables: \(0.043 \cdot zn - 0.27 \cdot indus + \dots + 0.004 \cdot medv\), and similarly for LD2. Let’s plot:
classes_new <- as.numeric(train_new$km_new.cluster)
plot(lda_model_new, dimen = 2, col = classes_new, pch = classes_new)
lda.arrows(lda_model_new, myscale = 1)
This time the most influential linear separators for the clusters are rad, age and zn.
correct_classes_new <- test_new$km_new.cluster
test_new <- dplyr::select(test_new, -km_new.cluster)
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda_model$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda_model$scaling
matrix_product <- as.data.frame(matrix_product)
Plotly graph:
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
km_cluster <- as.data.frame(km$cluster)
km_set <- km_cluster[ind,]
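The 3D plots themselves are not shown as code; a sketch of the two plotly calls, colouring the points first by the crime classes of the train set and then by the k-means clusters:
# 3D scatter of the train observations in the discriminant space, coloured by crime class
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = "scatter3d", mode = "markers", color = train$crime)
# the same points, coloured by the k-means cluster assignment
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = "scatter3d", mode = "markers", color = factor(km_set))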
Firstly, the number of groups is different, since for the earlier k-means I set only two clusters. But what we can see is that one cluster stands out clearly, while the rest of the observations are less separable.